#Shitty ai
Explore tagged Tumblr posts
12 notes
if there’s anything i’ve come to appreciate from the proliferation of ai images on the internet it’s how much i actually enjoy imperfect art. paintings, drawings, photographs, whatever. those imperfections aren’t flaws, they’re character, a layer of beauty in and of themselves.
2 notes
Gonna piss myself Google AI said if you get sprayed by a skunk take a shower that kills you
5 notes
Google ai told me the other day that 98 inches was between 2 and 3 feet. I laughed hysterically then wondered where the ever-loving fuck it got that answer from.
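For the record, the arithmetic the AI flubbed takes one line to sanity-check. A minimal Python sketch, using the 98-inch figure from the post above:

```python
# Sanity-checking the AI's claim that 98 inches is "between 2 and 3 feet".
INCHES_PER_FOOT = 12

inches = 98
feet, leftover = divmod(inches, INCHES_PER_FOOT)
print(f"{inches} in = {feet} ft {leftover} in")  # 98 in = 8 ft 2 in
print(2 <= inches / INCHES_PER_FOOT <= 3)        # False
```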
no i don't want to use your ai assistant. no i don't want your ai search results. no i don't want your ai summary of reviews. no i don't want your ai feature in my social media search bar (???). no i don't want ai to do my work for me in adobe. no i don't want ai to write my paper. no i don't want ai to make my art. no i don't want ai to edit my pictures. no i don't want ai to learn my shopping habits. no i don't want ai to analyze my data. i don't want it i don't want it i don't want it i don't fucking want it i am going to go feral and eat my own teeth stop itttt
#shitty ai#ai needs to fuck off#fuck ai#could have done the math myself#but I was feeling lazy#I guess a crappy ai answer is the price I paid
132K notes
Bossware is unfair (in the legal sense, too)
You can get into a lot of trouble by assuming that rich people know what they're doing. For example, you might assume that ad-tech works – bypassing people's critical faculties, reaching inside their minds and brainwashing them with Big Data insights – because if that's not what's happening, then why would rich people pour billions into those ads?
https://pluralistic.net/2020/12/06/surveillance-tulip-bulbs/#adtech-bubble
You might assume that private equity looters make their investors rich, because otherwise, why would rich people hand over trillions for them to play with?
https://thenextrecession.wordpress.com/2024/11/19/private-equity-vampire-capital/
The truth is, rich people are suckers like the rest of us. If anything, succeeding once or twice makes you an even bigger mark, with a sense of your own infallibility that inflates to fill the bubble your yes-men seal you inside of.
Rich people fall for scams just like you and me. Anyone can be a mark. I was:
https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security
But though rich people can fall for scams the same way you and I do, the way those scams play out is very different when the marks are wealthy. As Keynes had it, "The market can remain irrational longer than you can remain solvent." When the marks are rich (or worse, super-rich), they can be played for much longer before they go bust, creating the appearance of solidity.
Noted Keynesian John Kenneth Galbraith had his own thoughts on this. Galbraith coined the term "bezzle" to describe "the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it." In that magic interval, everyone feels better off: the mark thinks he's up, and the con artist knows he's up.
Rich marks have looong bezzles. Empirically incorrect ideas grounded in the most outrageous superstition and junk science can take over whole sections of your life, simply because a rich person – or a group of rich people – is convinced that they're good for you.
Take "scientific management." In the early 20th century, the con artist Frederick Taylor convinced rich industrialists that he could increase their workers' productivity through a kind of caliper-and-stopwatch driven choreography:
https://pluralistic.net/2022/08/21/great-taylors-ghost/#solidarity-or-bust
Taylor and his army of labcoated sadists perched at the elbows of factory workers (whom Taylor referred to as "stupid," "mentally sluggish," and as "an ox") and scripted their motions to a fare-thee-well, transforming their work into a kind of kabuki of obedience. They weren't more efficient, but they looked smart, like obedient robots, and this made their bosses happy. The bosses shelled out fortunes for Taylor's services, even though the workers who followed his prescriptions were less efficient and generated fewer profits. Bosses were so dazzled by the spectacle of a factory floor of crisply moving people interfacing with crisply working machines that they failed to understand that they were losing money on the whole business.
To the extent they noticed that their revenues were declining after implementing Taylorism, they assumed that this was because they needed more scientific management. Taylor had a sweet con: the worse his advice performed, the more reasons there were to pay him for more advice.
Taylorism is a perfect con to run on the wealthy and powerful. It feeds into their prejudice and mistrust of their workers, and into their misplaced confidence in their own ability to understand their workers' jobs better than their workers do. There's always a long dollar to be made playing the "scientific management" con.
Today, there's an app for that. "Bossware" is a class of technology that monitors and disciplines workers, and it was supercharged by the pandemic and the rise of work-from-home. Combine bossware with work-from-home and your boss gets to control your life even when you're in your own place – "work from home" becomes "live at work":
https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware
Gig workers are at the white-hot center of bossware. Gig work promises "be your own boss," but bossware puts a Taylorist caliper wielder into your phone, monitoring and disciplining you as you drive your own car around delivering parcels or picking up passengers.
In automation terms, a worker hitched to an app this way is a "reverse centaur." Automation theorists call a human augmented by a machine a "centaur" – a human head supported by a machine's tireless and strong body. A "reverse centaur" is a machine augmented by a human – like the Amazon delivery driver whose app goads them to make inhuman delivery quotas while punishing them for looking in the "wrong" direction or even singing along with the radio:
https://pluralistic.net/2024/08/02/despotism-on-demand/#virtual-whips
Bossware pre-dates the current AI bubble, but AI mania has supercharged it. AI pumpers insist that AI can do things it positively cannot do – rolling out an "autonomous robot" that turns out to be a guy in a robot suit, say – and rich people are groomed to buy the services of "AI-powered" bossware:
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
For an AI scammer like Elon Musk or Sam Altman, the fact that an AI can't do your job is irrelevant. From a business perspective, the only thing that matters is whether a salesperson can convince your boss that an AI can do your job – whether or not that's true:
https://pluralistic.net/2024/07/25/accountability-sinks/#work-harder-not-smarter
The fact that AI can't do your job, but that your boss can be convinced to fire you and replace you with the AI that can't do your job, is the central fact of the 21st century labor market. AI has created a world of "algorithmic management" where humans are demoted to reverse centaurs, monitored and bossed about by an app.
The techbro's overwhelming conceit is that nothing is a crime, so long as you do it with an app. Just as fintech is designed to be a bank that's exempt from banking regulations, the gig economy is meant to be a workplace that's exempt from labor law. But this wheeze is transparent, and easily pierced by enforcers, so long as those enforcers want to do their jobs. One such enforcer is Alvaro Bedoya, an FTC commissioner with a keen interest in antitrust's relationship to labor protection.
Bedoya understands that antitrust has a checkered history when it comes to labor. As he's written, the history of antitrust is a series of incidents in which Congress revised the law to make it clear that forming a union was not the same thing as forming a cartel, only to be ignored by boss-friendly judges:
https://pluralistic.net/2023/04/14/aiming-at-dollars/#not-men
Bedoya is no mere historian. He's an FTC Commissioner, one of the most powerful regulators in the world, and he's profoundly interested in using that power to help workers, especially gig workers, whose misery starts with systemic, wide-scale misclassification as contractors:
https://pluralistic.net/2024/02/02/upward-redistribution/
In a new speech to NYU's Wagner School of Public Service, Bedoya argues that the FTC's existing authority allows it to crack down on algorithmic management – that is, algorithmic management is illegal, even if you break the law with an app:
https://www.ftc.gov/system/files/ftc_gov/pdf/bedoya-remarks-unfairness-in-workplace-surveillance-and-automated-management.pdf
Bedoya starts with a delightful analogy to The Hawtch-Hawtch, a mythical town from a Dr Seuss poem. The Hawtch-Hawtch economy is based on beekeeping, and the Hawtchers develop an overwhelming obsession with their bee's laziness, and determine to wring more work (and more honey) out of him. So they appoint a "bee-watcher." But the bee doesn't produce any more honey, which leads the Hawtchers to suspect their bee-watcher might be sleeping on the job, so they hire a bee-watcher-watcher. When that doesn't work, they hire a bee-watcher-watcher-watcher, and so on and on.
For gig workers, it's bee-watchers all the way down. Call center workers are subjected to "AI" video monitoring, and "AI" voice monitoring that purports to measure their empathy. Another AI times their calls. Two more AIs analyze the "sentiment" of the calls and the success of workers in meeting arbitrary metrics. On average, a call-center worker is subjected to five forms of bossware, which stand at their shoulders, marking them down and brooking no debate.
For example, when an experienced call center operator fielded a call from a customer with a flooded house who wanted to know why no one from her boss's repair plan system had come out to address the flooding, the operator was punished by the AI for failing to try to sell the customer a repair plan. There was no way for the operator to protest that the customer had a repair plan already, and had called to complain about it.
Workers report being sickened by this kind of surveillance, literally – stressed to the point of nausea and insomnia. Ironically, among the most pervasive sources of automation-driven sickness are the "AI wellness" apps that AI hucksters sell to bosses:
https://pluralistic.net/2024/03/15/wellness-taylorism/#sick-of-spying
The FTC has broad authority to block "unfair trade practices," and Bedoya builds the case that this is an unfair trade practice. Proving an unfair trade practice is a three-part test: a practice is unfair if it causes "substantial injury," can't be "reasonably avoided," and isn't outweighed by a "countervailing benefit." In his speech, Bedoya makes the case that algorithmic management satisfies all three steps and is thus illegal.
On the question of "substantial injury," Bedoya describes the workday of warehouse workers at ecommerce companies. One worker he describes is monitored by an AI that requires him to pick an object off a moving belt and drop it every 10 seconds, ten hours per day. The worker's performance is tracked on a leaderboard, supervisors punish and scold workers who don't make quota, and the algorithm auto-fires those who fail to meet it.
Under those conditions, it was only a matter of time until the worker injured two of his discs and was permanently disabled. OSHA found a "direct connection" between the algorithm and the injury, and the company was found 100% responsible. No wonder warehouses sport vending machines that sell painkillers rather than sodas. Clearly, algorithmic management leads to "substantial injury."
What about "reasonably avoidable"? Can workers avoid the harms of algorithmic management? Bedoya describes the experience of NYC rideshare drivers who attended a round-table with him. The drivers described logging tens of thousands of successful rides for the apps they work for, on the promise of "being their own boss." But then the apps started randomly suspending them: telling them they weren't eligible to book a ride for hours at a time, or sending them across town to serve an underserved area and then suspending them anyway. Drivers who stop for a coffee or a pee are locked out of the apps for hours as punishment, and so they drive 12-hour shifts without a single break, in hopes of pleasing the inscrutable, high-handed app.
All this, as drivers' pay is falling and their credit card debts are mounting. No one will explain to drivers how their pay is determined, though the legal scholar Veena Dubal's work on "algorithmic wage discrimination" reveals that rideshare apps temporarily increase the pay of drivers who refuse rides, only to lower it again once they're back behind the wheel:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
This is like the pit boss who gives a losing gambler some freebies to lure them back to the table, over and over, until they're broke – no wonder it's called a "casino mechanic." There are only two major rideshare apps, and both use the same high-handed tactics. For Bedoya, this satisfies the second prong of the "unfair practice" test: the harm can't be reasonably avoided. If you drive rideshare, you're trapped by the harmful conduct.
The final prong of the "unfair practice" test is whether the conduct has "countervailing value" that makes up for this harm.
To address this, Bedoya goes back to the call center, where operators' performance is assessed by "Speech Emotion Recognition" algorithms – a pseudoscientific hoax that purports to determine your emotions from your voice. These SERs don't work: they might, for example, interpret a customer's laughter as anger. And they fail differently for different kinds of workers: workers with accents – from the American south, or the Philippines – attract more disapprobation from the AI. Half of all call center workers are monitored by SERs, and a quarter are scored by them "constantly."
Bossware AIs also produce transcripts of these workers' calls, but workers with accents find them "riddled with errors." These are consequential errors, since their bosses assess their performance based on the transcripts, and yet another AI produces automated work scores based on them.
In other words, algorithmic management is a procession of bee-watchers, bee-watcher-watchers, and bee-watcher-watcher-watchers, stretching to infinity. It's junk science. It's not producing better call center workers. It's producing arbitrary punishments, often against the best workers in the call center.
There is no "countervailing benefit" to offset the unavoidable substantial injury of life under algorithmic management. In other words, algorithmic management fails all three prongs of the "unfair practice" test, and it's illegal.
What should we do about it? Bedoya builds the case for the FTC acting on workers' behalf under its "unfair practice" authority, but he also points out that the lack of worker privacy is at the root of this hellscape of algorithmic management.
He's right. The last major update Congress made to US privacy law came in 1988, when it banned video-store clerks from telling the newspapers which VHS cassettes you rented. The US is long overdue for a new privacy regime, and workers under algorithmic management are part of a broad coalition that's closer than ever to making that happen:
https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy
Workers should have the right to know which of their data is being collected, who it's being shared with, and how it's being used. We all should have that right. That's part of what motivated the actors' strike: actors were being ordered to wear mocap suits to produce data that could be used to create digital doubles of them – "training their replacement," except the replacement was a deepfake.
With a Trump administration on the horizon, the future of the FTC is in doubt. But the coalition for a new privacy law includes many of Trumpland's most powerful blocs – like the Jan 6 rioters whose location data was swept up by Google and handed over to the FBI. A strong privacy law would protect their Fourth Amendment rights – but also the rights of BLM protesters, who experienced this far more often, and with far worse consequences, than the insurrectionists.
The "we do it with an app, so it's not illegal" ruse is wearing thinner by the day. When you have a boss for an app, your real boss gets an accountability sink, a convenient scapegoat that can be blamed for your misery.
The fact that this makes you worse at your job, that it loses your boss money, is no guarantee that you will be spared. Rich people make great marks, and they can remain irrational longer than you can remain solvent. Markets won't solve this one – but worker power can.
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#alvaro bedoya#ftc#workers#algorithmic management#veena dubal#bossware#taylorism#neotaylorism#snake oil#dr seuss#ai#sentiment analysis#digital phrenology#speech emotion recognition#shitty technology adoption curve
2K notes
baking poing so d dac
jim pow baking soda
1 note
Once more, this is ai. You can see it if you zoom in. I was a nature photographer once, so the fact that nobody talks about photography ai shit, and that it's just as much art theft as with drawings, is so fucking frustrating. Ffs
#Shitty ai#Once again#And OP didn't Tag it as such of course#Also comments are blocked#I wonder why lol#Ai#Ai art#I don't even wanna call it ai art there's no art in this#The nerve to tag it photography
12K notes
Hey guys do you like my SMG3 X SMG4 Song fanart?
#smg4#smg3#smg34#mango art#I couldn't stop thinking of this fucking song its like. rotted me#can't believe all it takes to push me to draw gay men kissing is a shitty AI content farm song. that's just how I am
780 notes
Repeat after me:
AI art is NOT devotional art
AI is not witchcraft
#anti AI#boost#kemetic#kemetism#witchcraft#witchblr#sea witch#paganism#see way too much ai shit#even if its the intention to talk about a deity or something#its still shitty#it shows that you're lazy#there's all this wonderful art made by real people#and you choose to spread shit that wasn't made by humans who put their heart and soul#yall need to stop worrying about this imposter spirit bullshit#and call out AI slop when you see it
576 notes
let him be pink!!
#komahina#nagito komaeda#hajime hinata#danganronpa#danganronpa 2#sdr2#my art#my posts#digital#described#the bg was vaguely inspired by arcane and shitty wand select tools#and now i fear between that and my conlang it looks like ai. just trust me on this one ok i would never#i draw my yaoi with my own 2 hands
385 notes
the brainrot is extreme. heres another silly doodle (ノ≧▽≦)ノΞ●~*
+ obligatory close up of the cute little faces lol
#extremely shitty resolution soz#saiki k#the disastrous life of saiki k.#digital art#dirtbag's saiki k vault >:)#this may or may not be a scene from something im writing... youll see.....#and if anyone asks the poor quality is just to prevent ppl from using it to train ai hahaha'''''''#def not.... just because im bad with technology.....#OH MY GOD WHO SAID THAT????
485 notes
it's such a tragic irony that swifties, who rallied so hard for regulations on AI usage - because we all knew and experienced how dangerous it can be in the worst possible way, still continue to use AI to generate songs and alarm sounds and album/song covers using taylor's voice like it doesn't violate her artistry at all. and then defend it by saying this is different because they're doing it with positive intentions. there's literally nothing productive about using AI to generate any kind of content, especially not when that content is a blatant (and disgustingly shallow) impersonation of someone swifties claim to love and respect so much
#meg talks#Why the fuck are y'all still making songs using AI do we not know how fucked up it is? Not to mention the song is SHITTY af lmao just stop?#Using her voice to make her say anything you want is creepy disgusting and terrifying#taylor swift
900 notes
FUCK DO YOU REMEMBER GAVIN REED FROM DBH??
If I remember correctly he's somewhat gen z. I UNDERSTAND NOW WHY HE HATES ANDROIDS SO MUCH HE LIVED THROUGH AI-POCALYPSES
#I MEAN LITERALLY#HOW MANY ART OR SHITTY AI SONGS HE SAW AND HEARD#I'm on his side on this#totally#what a man#gavin reed#detroit become human#dbh
579 notes
i admit i still had my contact lenses in (gr8 for far away, sucky for close-up vision) when I saw this, soooooo.....in my defense, uhh?
....uhhhh
...i see, well then!!
quite the poster tho! (hold up is that frond just floating in midair???)
....hepl i feel unsafe 💀😭
360 notes
RED VELVET COSMIC, 2024
#red velvet#redvelvetinc#femaleidolsedit#femaleidol#femadolsedit#kgoddesses#ggnet#rvedit#rv#99#09#edits#still cant believe sm gave us shitty ai backgrounds when they couldve used just sailor and goddess sets#the fact that these are in grass just look 100% better than the others
381 notes